
    Topological active model optimization by means of evolutionary methods for image segmentation

    [Abstract] Object localization and segmentation are tasks that have grown in relevance in recent years. The automatic detection and extraction of possible objects of interest is an important step towards higher-level reasoning, such as the detection of tumors or other pathologies in medical imaging, or the detection of the region of interest in fingerprints or faces for biometrics. There are many ways of approaching this problem in the literature, but in this PhD thesis we selected a particular deformable model called the Topological Active Model. This model was specifically designed for 2D and 3D image segmentation. It integrates features of region-based and boundary-based segmentation methods in order to perform a correct segmentation and, in this way, fit the contours of the objects and model their inner topology. The main problem is the optimization of the structure to obtain the best possible segmentation. Previous works proposed a greedy local search method that presented several drawbacks, especially with noisy images, a situation quite common in image segmentation. This PhD thesis proposes optimization approaches based on global search methods such as evolutionary algorithms, with the aim of overcoming the main drawbacks of the previous local search method, especially with noisy images or rough contours. Moreover, hybrid approaches combining the evolutionary methods and the greedy local search were developed to integrate the advantages of both. Additionally, the hybrid combination allows for topological changes in the segmentation model, giving the mesh the flexibility to perform better adjustments on complex surfaces or to detect several objects in the scene. The suitability and accuracy of the proposed model and segmentation methodologies were tested on both synthetic and real images with different levels of complexity. Finally, the proposed evolutionary approaches were applied to a specific task in a real domain: the localization and extraction of the optic disc in retinal images.
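The global-search idea behind this thesis can be illustrated with a minimal genetic algorithm. This is only a sketch under simplifying assumptions: the genome stands in for the offsets of the mesh nodes, and the toy `energy` function (squared distance from a target contour at zero) replaces the real Topological Active Model energy, which combines internal smoothness terms and external image-driven terms.

```python
import random

# Toy "energy" for a genome of node offsets; a real Topological Active Model
# energy would mix internal (smoothness) and external (image) terms.
def energy(genome):
    return sum(g * g for g in genome)

def evolve(pop_size=30, genome_len=8, generations=60, seed=0):
    rng = random.Random(seed)
    pop = [[rng.uniform(-5, 5) for _ in range(genome_len)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=energy)
        survivors = pop[: pop_size // 2]            # truncation selection (elitist)
        children = []
        while len(survivors) + len(children) < pop_size:
            a, b = rng.sample(survivors, 2)
            cut = rng.randrange(1, genome_len)      # one-point crossover
            child = a[:cut] + b[cut:]
            i = rng.randrange(genome_len)           # gaussian mutation of one gene
            child[i] += rng.gauss(0, 0.5)
            children.append(child)
        pop = survivors + children
    return min(pop, key=energy)

best = evolve()
```

Because the best individual always survives, the energy of the population's best solution never increases, which is the property that lets such global methods escape the noise-induced local minima that trap the greedy search.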

    Fully Automatic Deep Convolutional Approaches for the Analysis of COVID-19 Using Chest X-Ray Images

    Funded for open access publication: Universidade da Coruña/CISUG. [Abstract] Covid-19 is a new infectious disease caused by severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2). Given the seriousness of the situation, the World Health Organization declared a global pandemic as Covid-19 spread rapidly around the world. Among the available imaging modalities, chest X-ray images are frequently used for an early diagnosis/screening of Covid-19, given the frequent pulmonary impact in the patients, a critical issue in preventing further complications caused by this highly infectious disease. In this work, we propose 4 fully automatic approaches for the classification of chest X-ray images into 3 categories: Covid-19, pneumonia and healthy cases. Given the similar pathological impact of Covid-19 and pneumonia in the lungs, mainly during the initial stages of both diseases, we performed an exhaustive study of their differentiation considering different pathological scenarios. To address these classification tasks, we evaluated 6 representative state-of-the-art deep network architectures on 3 different public datasets: (I) the chest X-ray dataset of the Radiological Society of North America (RSNA); (II) the Covid-19 Image Data Collection; (III) the SIRM dataset of the Italian Society of Medical Radiology. To validate the designed approaches, several representative experiments were performed using 6,070 chest X-ray radiographs. In general, satisfactory results were obtained, reaching global accuracy values of 0.9706 ± 0.0044, 0.9839 ± 0.0102, 0.9744 ± 0.0104 and 0.9744 ± 0.0104, respectively, thus helping the work of clinicians in the diagnosis and, consequently, in the early treatment of this relevant pandemic pathology. This research was funded by Instituto de Salud Carlos III, Government of Spain, DTS18/00136 research project; Ministerio de Ciencia e Innovación y Universidades, Government of Spain, RTI2018-095894-B-I00 research project; Ministerio de Ciencia e Innovación, Government of Spain, through the research project with reference PID2019-108435RB-I00; Consellería de Cultura, Educación e Universidade, Xunta de Galicia, Spain, through the postdoctoral grant contract ref. ED481B-2021-059, and Grupos de Referencia Competitiva, grant ref. ED431C 2020/24; Axencia Galega de Innovación (GAIN), Xunta de Galicia, Spain, grant ref. IN845D 2020/38; CITIC, as a Research Center accredited by the Galician University System, is funded by Consellería de Cultura, Educación e Universidade from Xunta de Galicia, supported 80% through ERDF Funds (ERDF Operational Programme Galicia 2014–2020) and the remaining 20% by Secretaría Xeral de Universidades (Grant ED431G 2019/01). Funding for open access charge: Universidade da Coruña/CISUG.

    Deep Multi-Segmentation Approach for the Joint Classification and Segmentation of the Retinal Arterial and Venous Trees in Color Fundus Images

    Presented at the 4th XoveTIC Conference, A Coruña, Spain, 7–8 October 2021. [Abstract] The analysis of the retinal vasculature represents a crucial stage in the diagnosis of several diseases. An exhaustive analysis involves segmenting the retinal vessels and classifying them into veins and arteries. In this work, we present an accurate approach, based on deep neural networks, for the joint segmentation and classification of the retinal veins and arteries from color fundus images. The presented approach decomposes this joint task into three related subtasks: the segmentation of arteries, veins and the whole vascular tree. The experiments performed show that our method achieves competitive results in the discrimination of arteries and veins, while clearly enhancing the segmentation of the different structures. Moreover, unlike other approaches, our method allows for the straightforward detection of vessel crossings, and preserves the continuity of the arterial and venous vascular trees at these locations. This work was funded by Instituto de Salud Carlos III, Government of Spain, and the European Regional Development Fund (ERDF) of the European Union (EU) through the DTS18/00136 research project; Ministerio de Ciencia e Innovación, Government of Spain, through the RTI2018-095894-B-I00 and PID2019-108435RB-I00 research projects; Axencia Galega de Innovación (GAIN), Xunta de Galicia, ref. IN845D 2020/38; Xunta de Galicia and the European Social Fund (ESF) of the EU through the predoctoral grant contracts ED481A-2017/328 and ED481A 2021/140; Consellería de Cultura, Educación e Universidade, Xunta de Galicia, through Grupos de Referencia Competitiva, grant ref. ED431C 2020/24; CITIC, Centro de Investigación de Galicia ref. ED431G 2019/01, is funded by Consellería de Educación, Universidade e Formación Profesional, Xunta de Galicia, through the ERDF (80%) and Secretaría Xeral de Universidades (20%).
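The three-subtask decomposition described above can be sketched in a few lines. This is a hypothetical illustration, not the paper's network: binary masks are represented as sets of pixel coordinates, and a pixel predicted as both artery and vein is flagged as a crossing, which is exactly the case a single exclusive artery/vein labeling cannot express.

```python
# Fuse three binary masks (sets of (row, col) pixels) coming from the three
# subtasks: arteries, veins and the whole vascular tree.
def fuse_masks(artery, vein, tree):
    # A pixel where both the artery and the vein predictions fire is a
    # candidate vessel crossing; intersecting with the tree mask suppresses
    # overlaps outside the vasculature.
    crossings = artery & vein & tree
    labels = {}
    for p in tree:
        if p in crossings:
            labels[p] = "crossing"
        elif p in artery:
            labels[p] = "artery"
        elif p in vein:
            labels[p] = "vein"
        else:
            labels[p] = "uncertain"   # in the tree, but in neither class
    return labels, crossings

# Toy example: two vessels crossing at pixel (1, 1).
artery = {(0, 0), (1, 1), (2, 2)}
vein = {(2, 0), (1, 1), (0, 2)}
tree = artery | vein
labels, crossings = fuse_masks(artery, vein, tree)
```

Keeping the crossing pixels in both the arterial and the venous masks is what preserves the continuity of each tree through the crossing.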

    Paired and Unpaired Deep Generative Models on Multimodal Retinal Image Reconstruction

    [Abstract] This work explores the use of paired and unpaired data for training deep neural networks in the multimodal reconstruction of retinal images. In particular, we focus on the reconstruction of fluorescein angiography from retinography, two complementary representations of the eye fundus. The performed experiments allow us to compare the paired and unpaired alternatives. Instituto de Salud Carlos III; DTS18/00136. Ministerio de Ciencia, Innovación y Universidades; DPI2015-69948-R. Ministerio de Ciencia, Innovación y Universidades; RTI2018-095894-B-I00. Xunta de Galicia; ED431G/01. Xunta de Galicia; ED431C 2016-047. Xunta de Galicia; ED481A-2017/328.

    Intraretinal Fluid Detection by Means of a Densely Connected Convolutional Neural Network Using Optical Coherence Tomography Images

    [Abstract] We present a methodology with the objective of detecting fluid accumulations between the retinal layers. The methodology uses a robust Densely Connected Neural Network to classify thousands of subsamples extracted from a given Optical Coherence Tomography image. Subsequently, using the detected regions, it generates a coherent and intuitive confidence map by means of a voting strategy. This research was funded by Instituto de Salud Carlos III, grant number DTS18/00136; Ministerio de Economía y Competitividad, grant number DPI2015-69948-R; Xunta de Galicia through the accreditation of Centro Singular de Investigación 2016–2019, ref. ED431G/01; Xunta de Galicia through Grupos de Referencia Competitiva, ref. ED431C 2016-047; and Xunta de Galicia predoctoral grant contract ref. ED481A-2019/196.
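The voting strategy described above can be sketched as follows. This is a simplified stand-in, not the paper's pipeline: the `score_fn` argument replaces the DenseNet classifier, and each window's score is averaged into every pixel it covers to form the confidence map.

```python
# Build a pixel-level confidence map by voting: each window's classifier
# score is added to every pixel it covers, then normalized by the number of
# windows that covered that pixel.
def confidence_map(height, width, window, stride, score_fn):
    votes = [[0.0] * width for _ in range(height)]
    counts = [[0] * width for _ in range(height)]
    for top in range(0, height - window + 1, stride):
        for left in range(0, width - window + 1, stride):
            s = score_fn(top, left)  # classifier confidence for this window
            for r in range(top, top + window):
                for c in range(left, left + window):
                    votes[r][c] += s
                    counts[r][c] += 1
    return [[votes[r][c] / counts[r][c] if counts[r][c] else 0.0
             for c in range(width)] for r in range(height)]

# Hypothetical scorer: pretend windows starting in the upper half of the
# image contain fluid (score 1.0) and the rest do not (score 0.0).
cmap = confidence_map(8, 8, window=4, stride=2,
                      score_fn=lambda t, l: 1.0 if t < 4 else 0.0)
```

Averaging overlapping votes is what makes the resulting map smooth and intuitive even when individual window classifications are noisy.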

    Automatic Identification of Diabetic Macular Edema Using a Transfer Learning-Based Approach

    [Abstract] This paper presents a complete system for the automatic identification of pathological Diabetic Macular Edema (DME) cases using Optical Coherence Tomography (OCT) images as the source of information. To do so, the system extracts a set of deep features using a transfer learning-based approach from different fully-connected layers and different pre-trained Convolutional Neural Network (CNN) models. Next, the most relevant subset of deep features is identified using representative feature selection methods. Finally, a machine learning strategy is applied to train and test the potential of the identified deep features in the pathological classification process. Satisfactory results were obtained, demonstrating the suitability of the presented system to filter pathological DME cases, helping specialists optimize their diagnostic procedures. This work is supported by the Instituto de Salud Carlos III, Government of Spain, and FEDER funds through the DTS18/00136 research project, and by Ministerio de Ciencia, Innovación y Universidades, Government of Spain, through the DPI2015-69948-R and RTI2018-095894-B-I00 research projects. This work has also received financial support from the European Union (European Regional Development Fund, ERDF) and the Xunta de Galicia: Centro singular de investigación de Galicia accreditation 2016–2019, ref. ED431G/01; and Grupos de Referencia Competitiva, ref. ED431C 2016-047.
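The extract–select–classify pipeline above can be sketched with deliberately simple stand-ins: hand-made vectors replace the CNN deep features, variance ranking replaces the paper's feature selection methods, and a nearest-centroid rule replaces the machine learning classifier. All names and values here are illustrative assumptions, not the paper's actual components.

```python
# Stage 2 stand-in: keep the k feature dimensions with the highest variance
# across the training samples.
def select_top_variance(features, k):
    n = len(features)
    dims = len(features[0])
    means = [sum(f[d] for f in features) / n for d in range(dims)]
    var = [sum((f[d] - means[d]) ** 2 for f in features) / n for d in range(dims)]
    return sorted(range(dims), key=lambda d: var[d], reverse=True)[:k]

# Stage 3 stand-in: classify x by the closest per-class mean, computed only
# over the selected feature dimensions.
def nearest_centroid(train, labels, keep, x):
    centroids = {}
    for lbl in set(labels):
        rows = [f for f, l in zip(train, labels) if l == lbl]
        centroids[lbl] = [sum(r[d] for r in rows) / len(rows) for d in keep]
    def dist(c):
        return sum((x[d] - cv) ** 2 for d, cv in zip(keep, c))
    return min(centroids, key=lambda lbl: dist(centroids[lbl]))

# Toy "deep features": only dimension 1 actually separates the classes.
train = [[0.0, 5.0, 0.1], [0.1, 5.2, 0.0], [0.0, 0.9, 0.1], [0.1, 1.1, 0.0]]
labels = ["DME", "DME", "healthy", "healthy"]
keep = select_top_variance(train, 1)
pred = nearest_centroid(train, labels, keep, [0.0, 4.8, 0.05])
```

The point of the selection stage is visible even at this scale: the one high-variance dimension carries all the class information, so discarding the rest loses nothing.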

    Multi-Stage Transfer Learning for Lung Segmentation Using Portable X-Ray Devices for Patients With COVID-19

    [Abstract] One of the main challenges in times of sanitary emergency is to quickly develop computer-aided diagnosis systems with a limited number of available samples, due to the novelty and complexity of the case and the urgency of its implementation. This is the case during the current pandemic of COVID-19. This pathogen primarily infects the respiratory system of the afflicted, resulting in pneumonia and, in severe cases, acute respiratory distress syndrome. This leads to the formation of different pathological structures in the lungs that can be detected in chest X-rays. Due to the overload of the health services, portable X-ray devices are recommended during the pandemic to prevent the spread of the disease. However, these devices entail different complications (such as capture quality) that, together with the subjectivity of the clinician, make the diagnostic process more difficult and suggest the need for computer-aided diagnosis methodologies despite the scarcity of available samples. To solve this problem, we propose a methodology that adapts the knowledge from a well-known domain with a high number of samples to a new domain with a significantly reduced number of samples and greater complexity. We took advantage of a segmentation model pre-trained on brain magnetic resonance imaging of an unrelated pathology and performed two stages of knowledge transfer to obtain a robust system able to segment lung regions from portable X-ray devices despite the scarcity of samples and their lower quality. This way, our methodology obtained a satisfactory accuracy of 0.9761 ± 0.0100 for patients with COVID-19, 0.9801 ± 0.0104 for normal patients and 0.9769 ± 0.0111 for patients with pulmonary diseases with characteristics similar to COVID-19 (such as pneumonia) but not genuine COVID-19. This research was funded by Instituto de Salud Carlos III, Government of Spain, DTS18/00136 research project; Ministerio de Ciencia e Innovación y Universidades, Government of Spain, RTI2018-095894-B-I00 research project, and Ayudas para la formación de profesorado universitario (FPU), grant ref. FPU18/02271; Ministerio de Ciencia e Innovación, Government of Spain, through the research project with reference PID2019-108435RB-I00; Consellería de Cultura, Educación e Universidade, Xunta de Galicia, Grupos de Referencia Competitiva, grant ref. ED431C 2020/24; Axencia Galega de Innovación (GAIN), Xunta de Galicia, grant ref. IN845D 2020/38; CITIC, as a Research Center accredited by the Galician University System, is funded by Consellería de Cultura, Educación e Universidade from Xunta de Galicia, supported 80% through ERDF Funds (ERDF Operational Programme Galicia 2014–2020) and the remaining 20% by Secretaría Xeral de Universidades (Grant ED431G 2019/01).

    Color Fundus Image Registration Using a Learning-Based Domain-Specific Landmark Detection Methodology

    Funded for open access publication: Universidade da Coruña/CISUG. [Abstract] Medical imaging, and particularly retinal imaging, makes it possible to accurately diagnose many eye pathologies as well as some systemic diseases such as hypertension or diabetes. Registering these images is crucial to correctly compare key structures, not only within patients, but also to contrast data with a model or among a population. Currently, this field is dominated by complex classical methods, because the novel deep learning methods cannot yet compete with them in terms of results, and commonly used methods are difficult to adapt to the retinal domain. In this work, we propose a novel method to register color fundus images, based on previous works that employed classical approaches to detect domain-specific landmarks. Instead, we propose to use deep learning methods for the detection of these highly specific domain-related landmarks. Our method uses a neural network to detect the bifurcations and crossovers of the retinal blood vessels, whose arrangement and location are unique to each eye and person. This proposal is the first deep learning feature-based registration method in fundus imaging. These keypoints are matched using a method based on RANSAC (Random Sample Consensus) without the requirement to calculate complex descriptors. Our method was tested using the public FIRE dataset, although the landmark detection network was trained using the DRIVE dataset. Our method provides accurate results, with a registration score of 0.657 for the whole FIRE dataset (0.908 for category S, 0.293 for category P and 0.660 for category A). Therefore, our proposal can compete with complex classical methods and beats the deep learning methods in the state of the art. This research was funded by Instituto de Salud Carlos III, Government of Spain, DTS18/00136 research project; Ministerio de Ciencia e Innovación y Universidades, Government of Spain, RTI2018-095894-B-I00 research project; Consellería de Cultura, Educación e Universidade, Xunta de Galicia, through the predoctoral grant contract ref. ED481A 2021/147 and Grupos de Referencia Competitiva, grant ref. ED431C 2020/24; CITIC, Centro de Investigación de Galicia ref. ED431G 2019/01, receives financial support from Consellería de Educación, Universidade e Formación Profesional, Xunta de Galicia, through the ERDF (80%) and Secretaría Xeral de Universidades (20%). The funding institutions had no involvement in the study design; in the collection, analysis and interpretation of data; in the writing of the manuscript; or in the decision to submit the manuscript for publication. Funding for open access charge: Universidade da Coruña/CISUG.
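The descriptor-free RANSAC matching mentioned above follows the standard consensus scheme, sketched here for the simplest possible transform. This is an illustration under stated assumptions, not the paper's implementation: the model is a pure 2-D translation (one correspondence defines a candidate), whereas fundus registration would fit a richer transform the same way, sampling as many correspondences as the model needs.

```python
import random

# RANSAC for a 2-D translation between matched keypoints: each iteration
# samples one correspondence, hypothesizes the translation it implies, and
# scores the hypothesis by how many other matches agree within a tolerance.
def ransac_translation(matches, iters=200, tol=1.0, seed=0):
    rng = random.Random(seed)
    best_t, best_inliers = None, []
    for _ in range(iters):
        (x1, y1), (x2, y2) = rng.choice(matches)
        tx, ty = x2 - x1, y2 - y1                  # candidate model
        inliers = [m for m in matches
                   if abs(m[1][0] - m[0][0] - tx) <= tol
                   and abs(m[1][1] - m[0][1] - ty) <= tol]
        if len(inliers) > len(best_inliers):
            best_t, best_inliers = (tx, ty), inliers
    return best_t, best_inliers

# Hypothetical keypoint matches: a true shift of (10, -4) plus two outliers.
good = [((x, y), (x + 10, y - 4)) for x, y in [(0, 0), (3, 7), (8, 2), (5, 5)]]
bad = [((1, 1), (40, 40)), ((2, 2), (-9, 30))]
t, inliers = ransac_translation(good + bad)
```

Because consensus, not descriptor similarity, decides which matches survive, the gross outliers are rejected without ever computing a descriptor, which is the property the abstract relies on.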

    Fully Automatic Method for the Visual Acuity Estimation Using OCT Angiographies

    [Abstract] In this work, we propose the automatic estimation of the visual acuity of patients with retinal vein occlusion using Optical Coherence Tomography Angiography (OCTA) images. To do this, we first extract the most relevant biomarkers in this imaging modality: the area of the foveal avascular zone and the vascular densities in different regions of the OCTA image. Then, we use a support vector machine to estimate the visual acuity. We obtained a mean absolute error of 0.1713 between the manually measured and the estimated visual acuity, which is considered a satisfactory result. Centro de Investigación de Galicia; ED431G 2019/01. Xunta de Galicia; DTS18/00136. Ministerio de Ciencia, Innovación y Universidades; RTI2018-095894-B-I0.
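The two computational pieces named above, regional vascular density features and the mean absolute error metric, can be sketched directly. This is a toy illustration: the mask, the region names and the acuity values are made up, and the support vector regressor is omitted.

```python
# Regional vessel density of the kind used as an OCTA biomarker: the fraction
# of "vessel" pixels inside each named region of a binary mask.
def vessel_densities(mask, regions):
    return {name: sum(1 for p in pixels if p in mask) / len(pixels)
            for name, pixels in regions.items()}

# Mean absolute error between measured and estimated visual acuities.
def mean_absolute_error(y_true, y_pred):
    return sum(abs(t - p) for t, p in zip(y_true, y_pred)) / len(y_true)

mask = {(0, 0), (0, 1), (1, 0)}                       # toy vessel pixels
regions = {"superior": [(0, 0), (0, 1)],              # hypothetical regions
           "inferior": [(1, 0), (1, 1)]}
feats = vessel_densities(mask, regions)
mae = mean_absolute_error([0.20, 0.50], [0.30, 0.45])
```

A regressor trained on such density features (plus the foveal avascular zone area) would be evaluated with exactly this MAE, which is how the reported 0.1713 should be read: the average acuity error per patient.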